Fourier transform
Spectral methods: crucial for machine learning, natural for quantum computers?
Vasilis Belis, Joseph Bowles, Rishabh Gupta, Evan Peters, Maria Schuld
This article presents an argument for why quantum computers could unlock new methods for machine learning. We argue that spectral methods, in particular those that learn, regularise, or otherwise manipulate the Fourier spectrum of a machine learning model, are often natural for quantum computers. For example, if a generative machine learning model is represented by a quantum state, the Quantum Fourier Transform allows us to manipulate the Fourier spectrum of the state using the entire toolbox of quantum routines, an operation that is usually prohibitive for classical models. At the same time, spectral methods are surprisingly fundamental to machine learning: A spectral bias has recently been hypothesised to be the core principle behind the success of deep learning; support vector machines have been known for decades to regularise in Fourier space, and convolutional neural nets build filters in the Fourier space of images. Could, then, quantum computing open fundamentally different, much more direct and resource-efficient ways to design the spectral properties of a model? We discuss this potential in detail here, hoping to stimulate a direction in quantum machine learning research that puts the question of "why quantum?" first.
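The spectral manipulation the abstract describes can be illustrated with a small classical sketch (plain NumPy, all names and parameters hypothetical): a generative model over N = 2^n outcomes is stored as an amplitude vector, the amplitudes are Fourier-transformed, and the spectrum is low-pass filtered. Classically this touches all N amplitudes at O(N log N) cost; the point of the abstract is that a QFT would apply the same unitary transform to an n-qubit state directly.

```python
import numpy as np

# Classical sketch of "manipulating the Fourier spectrum of a model".
# A generative model over N = 2**n outcomes is stored as an amplitude
# vector (square roots of probabilities); the QFT would Fourier-transform
# these amplitudes with poly(n) gates, while classically we pay O(N log N).
n = 8
N = 2 ** n
rng = np.random.default_rng(0)

p = rng.random(N)
p /= p.sum()                      # model distribution over N outcomes
amps = np.sqrt(p)                 # "quantum state" amplitudes

spectrum = np.fft.fft(amps) / np.sqrt(N)   # unitary DFT, analogue of the QFT

# Spectral regularisation: keep only the lowest-frequency modes.
cutoff = 16
mask = np.zeros(N)
mask[:cutoff] = 1.0               # non-negative frequencies 0 .. cutoff-1
mask[-cutoff + 1:] = 1.0          # negative frequencies sit at the tail
filtered = spectrum * mask

new_amps = np.fft.ifft(filtered) * np.sqrt(N)
new_p = np.abs(new_amps) ** 2
new_p /= new_p.sum()              # renormalise (filtering is not norm-preserving)
```

On a quantum computer the renormalisation step is where the analogy breaks: projecting onto a filtered spectrum is itself a non-trivial quantum operation, which is exactly the kind of routine the article argues the quantum toolbox provides.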
High-Resolution Tensor-Network Fourier Methods for Exponentially Compressed Non-Gaussian Aggregate Distributions
Juan José Rodríguez-Aldavero, Juan José García-Ripoll
Its low-rank QTT structure arises from intrinsic spectral smoothness in continuous models, or from spectral energy concentration as the number of components D grows in discrete models. We demonstrate this on weighted sums of Bernoulli and lognormal random variables. In the latter, the approach reaches high-resolution discretizations of N = 2^30 frequency modes on standard hardware, far beyond the N = 2^24 ceiling of dense implementations. These compressed representations enable efficient computation of Value at Risk (VaR) and Expected Shortfall (ES), supporting applications in quantitative finance and beyond.

I. INTRODUCTION

Weighted sums of independent random variables constitute a basic probabilistic model, describing macroscopic behavior arising from the aggregation of microscopic stochastic components. These models arise in a wide range of applications. Their probability distribution generally lacks a closed-form expression, and its evaluation involves multidimensional convolution integrals that are susceptible to the curse of dimensionality. Consequently, evaluating these models relies on specialized numerical methods. While these methods have been adapted for discrete settings [18, 19], they are frequently hampered by persistent Gibbs oscillations, which arise from distributional discontinuities and preclude uniform convergence [20, 21]. No existing method simultaneously achieves an accurate approximation of the exact, fully non-Gaussian target distribution while remaining scalable to larger, practically relevant system sizes. In this work, we introduce a new algorithm that combines the Fourier spectral method with tensor-network techniques.
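For reference, here is a minimal dense sketch (made-up integer weights and probabilities) of the Fourier spectral method for one of the discrete models mentioned above, a weighted Bernoulli sum: sample the product-form characteristic function at the DFT frequencies, invert with a single FFT, and read off VaR and ES from the recovered distribution. This is the dense baseline whose memory ceiling the paper's QTT compression is designed to break.

```python
import numpy as np

# Fourier spectral method for the aggregate distribution of a weighted
# Bernoulli sum S = sum_i w_i * X_i with X_i ~ Bernoulli(p_i).
# Integer weights put S on a lattice of N points, so the characteristic
# function can be sampled at the N DFT frequencies and inverted exactly.
weights = np.array([1, 2, 3, 5, 8])          # hypothetical integer weights
probs = np.array([0.3, 0.5, 0.2, 0.4, 0.6])  # P(X_i = 1)

N = int(weights.sum()) + 1                   # support {0, ..., sum w_i}
t = 2 * np.pi * np.arange(N) / N             # DFT frequencies

# phi(t) = prod_i (1 - p_i + p_i * exp(i * t * w_i)), a product of D factors
phi = np.prod(1 - probs[:, None]
              + probs[:, None] * np.exp(1j * np.outer(weights, t)), axis=0)

pmf = np.real(np.fft.ifft(phi))              # invert the spectrum
pmf = np.clip(pmf, 0.0, None)                # remove tiny negative float noise

# Risk measures: 95% VaR is the smallest level s with P(S <= s) >= 0.95;
# ES averages the loss over the tail at and beyond that level.
cdf = np.cumsum(pmf)
var95 = int(np.searchsorted(cdf, 0.95))
tail = pmf[var95:]
es95 = float(np.dot(np.arange(var95, N), tail) / tail.sum())
```

The dense cost is O(N log N) time and O(N) memory, which is what caps dense implementations at N = 2^24 modes; the QTT representation replaces the length-N vectors with tensor trains whose size depends on the rank rather than on N.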
Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series
Lately, there has been a surge in interest surrounding generative modeling of time series data. Most existing approaches are designed either to process short sequences or to handle long-range sequences. This dichotomy can be attributed to gradient issues with recurrent networks, computational costs associated with transformers, and limited expressiveness of state space models. Towards a unified generative model for varying-length time series, we propose in this work to transform sequences into images.
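The excerpt does not specify which image transform the authors use; as one hypothetical instantiation, the sketch below (plain NumPy, illustrative parameters) turns a 1-D series of arbitrary length into a 2-D log-magnitude spectrogram, the kind of fixed-format image a standard diffusion model could then be trained on.

```python
import numpy as np

# One plausible sequence-to-image transform: a short-time Fourier transform
# maps a 1-D time series to a 2-D time-frequency "image", so the same image
# generative model can handle short and long sequences alike.
def stft_image(x, win=64, hop=16):
    """Log-magnitude spectrogram of a 1-D series as a 2-D (freq, time) array."""
    window = np.hanning(win)
    frames = [x[s:s + win] * window
              for s in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.stack(frames), axis=1))   # (time, freq)
    return np.log1p(spec).T                                 # (freq, time)

t = np.linspace(0, 1, 1024)
series = (np.sin(2 * np.pi * 50 * t)
          + 0.1 * np.random.default_rng(1).standard_normal(1024))
img = stft_image(series)
# 64-sample windows give 33 rfft bins; a hop of 16 over 1024 samples gives 61 frames
```

Whatever transform is chosen, it must be (approximately) invertible so that images sampled from the diffusion model can be mapped back to time series.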
Learning Functional Transduction: S.I. Contents
Below we provide the proofs of the results presented in the main text. We build on the RKBS framework developed in (Zhang et al., 2009; Song et al., 2013) and on the vector-valued construction of (Giles, 1967), which gives a condition of the form = 0, ∀ j ≤ n, ∀ u ∈ U (9) and allows us to say that O ∈ RKBS (Corollary 3.2 of Zhang (2013)), which we recall hereafter. We first define this for any linear operator. We show our result in the case J = 1; it extends directly to any cardinality J.

Specifically, we tested three expressions. The first two expressions yield similar results in the ADR experiment at equal compute cost. We also tried a 'branch' and 'trunk' network formulation of the model, as in DeepONet (Lu et al.).

Table S.2: Summary of the architectural hyperparameters used to build the Transducer in the four experiments. 'Depth' corresponds to the network's number of layers, 'MLP dim' to the dimensionality of the hidden layer.

As stated, we used the same meta-training procedure for all experiments. Table S.3: Summary of the meta-learning hyperparameters used to meta-train the Transducer in our four experiments.

Figure S.1: Examples of sampled functions δ(x) and ν(x) used to build operators O. We train Transducers for 200K gradient steps, using the ΦFlow library (Holl et al., 2020), which allows for batched and differentiable simulations of fluid dynamics.

Figure S.5: Magnitude of the complex coefficients of the Fourier transform of an example pair of input and output functions.

To tackle the high-resolution climate modeling experiment, we take inspiration from Pathak et al. (2022), which combines neural operators with patch splitting (L = 12), in order to match the number of trainable parameters.